AAAI AI-Alert for Feb 19, 2019
The Secret History of Women in Coding
As a teenager in Maryland in the 1950s, Mary Allen Wilkes had no plans to become a software pioneer -- she dreamed of being a litigator. One day in junior high in 1950, though, her geography teacher surprised her with a comment: "Mary Allen, when you grow up, you should be a computer programmer!" Wilkes had no idea what a programmer was; she wasn't even sure what a computer was. The first digital computers had been built barely a decade earlier at universities and in government labs. By the time she was graduating from Wellesley College in 1959, she knew her legal ambitions were out of reach. Her mentors all told her the same thing: Don't even bother applying to law school.
Can we trust scientific discoveries made using machine learning?
Rice University statistician Genevera Allen says scientists must keep questioning the accuracy and reproducibility of scientific discoveries made by machine-learning techniques until researchers develop new computational systems that can critique themselves. Allen, associate professor of statistics, computer science and electrical and computer engineering at Rice and of pediatrics-neurology at Baylor College of Medicine, will address the topic in both a press briefing and a general session today at the 2019 Annual Meeting of the American Association for the Advancement of Science (AAAS). "The question is, 'Can we really trust the discoveries that are currently being made using machine-learning techniques applied to large data sets?'" Allen said. "The answer in many situations is probably, 'Not without checking,' but work is underway on next-generation machine-learning systems that will assess the uncertainty and reproducibility of their predictions." Machine learning (ML) is a branch of statistics and computer science concerned with building computational systems that learn from data rather than following explicit instructions. Allen said much attention in the ML field has focused on developing predictive models that allow ML to make predictions about future data based on patterns learned from the data it has already studied.
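As a rough illustration of the kind of check Allen is calling for, the sketch below (my own example, not taken from the article or Allen's work) re-runs a clustering on bootstrap resamples of a data set and measures how often the same groups reappear; the synthetic data, the cluster count, and the stability metric are all placeholder assumptions. If the groups do not persist across resamples, the "discovery" probably should not be trusted without further checking.

```python
# Minimal sketch of a reproducibility check for a clustering "discovery":
# re-run the clustering on bootstrap resamples and measure agreement.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # stand-in for a large data set

# Reference clustering on the full data (the candidate "discovery").
reference = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

scores = []
for _ in range(20):
    idx = rng.integers(0, len(X), len(X))                 # bootstrap resample
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(X[idx])
    # Compare the resampled clustering to the reference on the same points.
    scores.append(adjusted_rand_score(reference[idx], labels))

print(f"mean cluster stability (adjusted Rand index): {np.mean(scores):.2f}")
```

On pure noise like this, the stability score stays low, which is exactly the warning sign such a check is meant to surface.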
Sci-Fi Author Robert Heinlein Was Basically MacGyver
Robert Heinlein is the legendary author of such classic works as Starship Troopers, The Moon Is a Harsh Mistress, and Stranger in a Strange Land. His books have influenced generations of artists and scientists, including physicist and science fiction writer Gregory Benford. "He was one of the people who propelled me forward to go into the sciences," Benford says in Episode 348 of the Geek's Guide to the Galaxy podcast. "Because his depiction of the prospect of the future of science, engineering--everything--was so enticing. He was my favorite science fiction writer."
The technology behind OpenAI's fiction-writing, fake-news-spewing AI, explained
OpenAI's new language model generates text so convincing that the researchers have refrained from open-sourcing the code, in hopes of stalling its potential weaponization as a means of mass-producing fake news. While the impressive results are a remarkable leap beyond what existing language models have achieved, the technique involved isn't exactly new. Instead, the breakthrough was driven primarily by feeding the algorithm ever more training data--a trick that has also been responsible for most of the other recent advancements in teaching AI to read and write. "It's kind of surprising people in terms of what you can do with [...] more data and bigger models," says Percy Liang, a computer science professor at Stanford.
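For context, a language model simply learns how likely each word (or sub-word token) is to follow the text so far, then generates by sampling the next token over and over; OpenAI's model does this with a very large neural network trained on a very large web corpus. The toy sketch below (my illustration, not OpenAI's code) shows the same idea with a tiny bigram count table standing in for the model.

```python
# Toy language model: count which word follows which, then sample.
import random
from collections import defaultdict

corpus = "the model writes text . the model reads text . the text is fake".split()

# "Training": record word -> next-word transitions.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start="the", length=8):
    """Generate text by repeatedly sampling a plausible next word."""
    word, out = start, [start]
    for _ in range(length):
        choices = transitions.get(word)
        if not choices:
            break
        word = random.choice(choices)   # sample the next word
        out.append(word)
    return " ".join(out)

print(generate())
```

Scaling this same recipe up to far larger models and training corpora is, in essence, the "more data and bigger models" trick Liang describes.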
How Taylor Swift showed us the scary future of facial recognition
Taylor Swift raised eyebrows late last year when Rolling Stone magazine revealed her security team had deployed facial recognition technology during her Reputation tour to root out stalkers. But the company contracted for the efforts uses its technology to provide much more than just security. ISM Connect also uses its smart screens to capture metrics for promotion and marketing. Facial recognition, used for decades by law enforcement and militaries, is quickly becoming a commercial tool to help brands engage consumers. Swift's tour is just the latest example of the growing privacy concerns around the largely unregulated, billion-dollar industry.
World calls for international treaty to stop killer robots before rogue states acquire them
There is widespread public support for a ban on so-called "killer robots", weapons that campaigners say would "cross a moral line" after which it would be difficult to return. Polling across 26 countries found over 60 per cent of the thousands asked opposed lethal autonomous weapons that can kill with no human input, and only around a fifth backed them. The figures showed public support was growing for a treaty to regulate these controversial new technologies - a treaty which is already being pushed by campaigners, scientists and many world leaders. However, a meeting in Geneva at the close of last year ended in a stalemate after nations including the US and Russia indicated they would not support the creation of such a global agreement. Mary Wareham of Human Rights Watch, who coordinates the Campaign to Stop Killer Robots, compared the movement to successful efforts to eradicate landmines from battlefields.
Robot mimics desert ants to find its way home without GPS
A six-legged robot can find its way home without the help of GPS, thanks to tactics borrowed from desert ants. The robot, called AntBot, uses light from the sky to judge the direction it is going. To assess the distance travelled it uses a combination of observing the motion of objects on the ground as they pass by and counting steps. All three of these techniques are used by desert ants. To test AntBot, Stéphane Viollet at Aix-Marseille University in France and colleagues set it an outdoor homing task: first go to several checkpoints, then return home.
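For readers curious how this kind of dead reckoning (path integration) works, here is a minimal sketch under my own assumptions, not the AntBot source code: each stride's heading (from the celestial compass) and length (from step counting and optic flow) is added to a running displacement vector, and the home vector is simply that vector reversed.

```python
# Ant-style path integration: accumulate displacement, then reverse it to go home.
import math

x, y = 0.0, 0.0                      # accumulated displacement from home (metres)

def record_stride(heading_rad, distance_m):
    """Add one stride, given its heading and estimated length."""
    global x, y
    x += distance_m * math.cos(heading_rad)
    y += distance_m * math.sin(heading_rad)

def home_vector():
    """Bearing (radians) and distance (metres) back to the start point."""
    return math.atan2(-y, -x), math.hypot(x, y)

# Outbound leg: a few strides with headings taken from the sky compass.
for heading_deg, dist in [(0, 1.0), (45, 1.5), (90, 2.0)]:
    record_stride(math.radians(heading_deg), dist)

bearing, dist = home_vector()
print(f"steer {math.degrees(bearing):.1f} deg and travel {dist:.2f} m to reach home")
```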
How Machine Learning Is Crafting Precision Medicine
Such targeted care is referred to as precision medicine--drugs or treatments designed for small groups, rather than large populations, based on characteristics such as medical history, genetic makeup, and data recorded by wearable devices. In 2003, the completion of the Human Genome Project was accompanied by breathless promises about the imminence of these treatments, but results have so far underwhelmed. Today, new technologies are revitalizing the promise. At organizations ranging from large corporations to university-led and government-funded research collectives, doctors are using artificial intelligence (AI) to develop precision treatments for complex diseases.
Rethinking Medical Ethics
AI promises to be a boon to medical practice, improving diagnoses, personalizing treatment, and spotting future public-health threats. By 2024, experts predict, healthcare AI will be a nearly $20 billion market, with tools that transcribe medical records, assist surgery, and investigate insurance claims for fraud. Even so, the technology raises some knotty ethical questions. What happens when an AI system makes the wrong decision--and who is responsible if it does? How can clinicians verify, or even understand, what comes out of an AI "black box"?